
MapReduce error: The required MAP capability is more than the supported max container capability in the cluster

The error

The required MAP capability is more than the supported max container capability in the cluster. Killing the Job. mapResourceRequest: <memory:3072, vCores:1> maxContainerCapability:<memory:1460, vCores:1>
Job received Kill while in RUNNING state.

This error causes the job to be killed.

Solutions

Answer 1

https://stackoverflow.com/questions/25878458/rhadoop-reduce-capability-required-is-more-than-the-supported-max-container-cap

I have not used RHadoop. However, I've had a very similar problem on my cluster, and this problem seems to be linked only to MapReduce.

The maxContainerCapability in this log refers to the yarn.scheduler.maximum-allocation-mb property of your yarn-site.xml configuration. It is the maximum amount of memory that can be used in any container.

The mapResourceReqt and reduceResourceReqt in your log refer to the mapreduce.map.memory.mb and mapreduce.reduce.memory.mb properties of your mapred-site.xml configuration. It is the memory size of the containers that will be created for a Mapper or a Reducer in mapreduce.
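
For orientation, here is roughly how those properties look in the two configuration files. The values are purely illustrative: the map request mirrors the 3072 MB from the error log above, and the maximum-allocation value mirrors the 1460 MB cluster limit, which is exactly the mismatch that gets the job killed (the reduce value is not shown in that log and is assumed here for completeness).

<!-- yarn-site.xml (inside <configuration>): the largest container YARN will grant -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>1460</value>
</property>

<!-- mapred-site.xml (inside <configuration>): container size requested per Mapper / Reducer -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>3072</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>3072</value>  <!-- not shown in the log above; assumed for illustration -->
</property>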

If the size of your Reducer’s container is set to be greater than yarn.scheduler.maximum-allocation-mb, which seems to be the case here, your job will be killed because it is not allowed to allocate so much memory to a container.

Check your configuration at http://[your-resource-manager]:8088/conf and you should normally find these values and see that this is the case.

Maybe your new environment has these values set to 4096 MB (which is quite big, the default in Hadoop 2.7.1 being 1024).

Solution

You should either lower the mapreduce.[map|reduce].memory.mb values to 1024, or, if you have lots of memory and want huge containers, raise the yarn.scheduler.maximum-allocation-mb value to 4096. Only then will MapReduce be able to create containers.
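
As a concrete sketch of those two options (using the 1024 and 4096 figures from the answer above; adjust them to your own cluster), the changes would look like this:

<!-- Option 1, mapred-site.xml (inside <configuration>): lower the per-task container requests -->
<property>
  <name>mapreduce.map.memory.mb</name>
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>
  <value>1024</value>
</property>

<!-- Option 2, yarn-site.xml (inside <configuration>): raise the cluster-wide container ceiling -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>

The mapred-site.xml values are read from the client configuration when the job is submitted, while a change to yarn.scheduler.maximum-allocation-mb typically only takes effect after the ResourceManager is restarted.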

I hope this helps.

Answer 2

https://stackoverflow.com/questions/25753983/how-do-you-change-the-max-container-capability-in-hadoop-cluster

To do this on Hortonworks 2.1, I had to

Increase the VirtualBox memory from 4096 to 8192 (not sure whether that was strictly necessary)
Enable Ambari from http://my.local.host:8000
Log into Ambari at http://my.local.host:8080
Change the values of yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb from the defaults to 4096 (an equivalent yarn-site.xml change is sketched after this list)
Save and restart everything (via Ambari)

This got me past the “capability required” errors, but the actual wordcount.R doesn’t seem to want to complete. Things like hdfs.ls(“/data”) do work, however.
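
If the cluster is not managed through Ambari, the same change made through the Ambari UI above can be applied directly in yarn-site.xml on the cluster nodes; the 4096 values below simply mirror the ones used in that answer.

<!-- yarn-site.xml (inside <configuration>): total memory a NodeManager offers to containers -->
<property>
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>4096</value>
</property>
<!-- yarn-site.xml: the largest single container the scheduler may allocate -->
<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>4096</value>
</property>

Restart the ResourceManager and the NodeManagers afterwards so the new limits are picked up.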

In short: the values of yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb in yarn-site.xml must be >= the values of mapreduce.map.memory.mb and mapreduce.reduce.memory.mb in mapred-site.xml.

Reference:

YARN best practices: http://blog.csdn.net/jiangshouzhuang/article/details/52595781
